OpenAI, Anthropic, Google unite to combat AI model copying in China

OpenAI has accused Chinese firm DeepSeek of trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs”.

Rivals OpenAI, Anthropic and Alphabet’s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US artificial intelligence models to gain an edge in the global AI race.

The firms are sharing information through the Frontier Model Forum, an industry non-profit that the three tech companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter.

The rare collaboration underscores the severity of a concern raised by US AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. US officials have estimated that unauthorised distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings.

OpenAI confirmed it is part of the information-sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to Congress on the practice, where it accused Chinese firm DeepSeek of trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs”. Google, Anthropic and the Frontier Model Forum declined to comment. 

Distillation is a technique where an older “teacher” AI model is used to train a newer “student” model that replicates the capabilities of the earlier system – often at a much lower cost than producing an original model from scratch. Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technologies.
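In the classic form of the technique, the student is trained to match the teacher’s softened output probabilities rather than hard labels. The sketch below is purely illustrative of that idea, assuming a simple KL-divergence objective with a temperature parameter; it does not represent any lab’s actual training pipeline, and the function names are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution (the 'soft
    labels') and the student's. A training loop would minimise this."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that already matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss to be driven down by training.
teacher = [2.0, 1.0, 0.1]
matched = distillation_loss(teacher, teacher)
mismatched = distillation_loss([0.1, 1.0, 2.0], teacher)
```

Adversarial distillation, as described in the article, works the same way in spirit: the “teacher” signal is harvested by querying a proprietary model’s API at scale, which is what the labs’ terms of service prohibit.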

Yet distillation has been controversial when used by third parties – particularly in adversary nations like China or Russia – to replicate proprietary work without authorisation. Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen.

Most models made by Chinese labs are open weight, meaning that parts of the underlying AI system are publicly available for users to freely download and run on their own platforms, and therefore cheaper to use.

That poses an economic challenge for US AI companies that have kept their models proprietary, betting that customers will pay for access to their products and help offset the hundreds of billions of dollars they have spent on data centres and other infrastructure. 

Distillation first drew significant scrutiny in January 2025 in the weeks after DeepSeek’s surprise release of the R1 reasoning model that took the AI world by storm. Soon after, Microsoft and OpenAI investigated whether the Chinese start-up had improperly exfiltrated large amounts of data from the US firm’s models to create R1.

In February, OpenAI warned US lawmakers that DeepSeek had continued to use increasingly sophisticated tactics to extract results from US models, despite heightened efforts to prevent misuse of its products. OpenAI claimed in its memo to the House Select Committee on China that DeepSeek was relying on distillation to develop a new version of its breakthrough chatbot.

Trump administration officials have signalled their openness to fostering information sharing among AI companies to rein in adversarial distillation. The AI Action Plan unveiled by US President Donald Trump in 2025 called for the creation of an information sharing and analysis centre, in part for this purpose.

For now, information sharing on distillation remains limited due to AI companies’ uncertainty about what can be shared under existing antitrust guidance to counter the competitive threat from China, according to people familiar with the matter. The firms would benefit from greater clarity from the US government, the people said. BLOOMBERG
